.cb On Reification
When we formalize fragments of common sense knowledge in
first order logic, we treat the property of redness by introducing
a unary predicate ⊗red(x) or in some formalizations ⊗red(x,s) where
the variable ⊗s represents a situation. We say that we ⊗reify the
predicate ⊗red by introducing ⊗redness as an entity. The Latin word
⊗res means "thing", and to ⊗reify a concept is to make a thing out of
it. Synonymous with ⊗red(x) is ⊗has(x,redness), and we may similarly
write ⊗has(x,redness,s) for ⊗red(x,s). Philosophers of the nominalist
persuasion and positivistically inclined scientists tend to deplore
words like "redness". They don't see any advantage in ⊗has(x,redness) over
⊗red(x), and they fear that letting entities like redness into
the ontology (i.e. the set of entities that are considered to exist)
will lead to meaningless statements.
If all we want to say is ⊗red(x) for various objects ⊗x, there
is indeed no need for a concept of redness.
However, if we want to quantify over colors or properties in order
to derive statements about redness from general statements about
colors or properties, then we need a constant in the language to
substitute for the variable. Thus we may write
%2color(x,s) = red%1,
where ⊗red is now a term that a variable ranging over colors can
replace. While we have taken redness as an example, there are many
more potential uses of reification in AI.
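As a rough illustration in modern terms, the following Python sketch
treats properties as first-class values so that a program can quantify
over them; the objects and the property table are invented for the
example.

    # Reified properties: redness is an ordinary value that a
    # variable can be bound to and quantified over.
    redness = "redness"
    roundness = "roundness"

    # has(x, p) means object x has property p.  Here the properties
    # are simply stored in a table; a reasoning program would derive
    # them from axioms.
    properties = {
        "Block1": {redness, roundness},
        "Block2": {roundness},
    }

    def has(x, p):
        return p in properties[x]

    # With properties reified we can say "every property of Block2 is
    # a property of Block1", a quantification over properties that the
    # bare predicate red(x) does not support.
    print(all(has("Block1", p) for p in properties["Block2"]))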
We can regard redness as an object which may or may not
be present in the language. More interesting are objects introduced
by a reasoning program to serve particular purposes. For example,
if a boat is non-functional, we can ask "What is wrong with the boat?",
and later, after some "things wrong with the boat" have been fixed,
we may ask "Is there anything else wrong with the boat?".
We may ask "What is the connection between Senator McGovern's recent
hawkish statements on Iran and the fact that he is now running
for re-election?".
The goal toward which this note hopes to make some progress
is that of giving mechanisms for ad hoc reifications like "things
wrong with the boat" and "connections between statements and facts".
More common reifications are reasons for decisions and causes of
events.
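One way to picture such a mechanism is a program that creates an
object for each "thing wrong with the boat" as it is discovered, so
that later questions can quantify over them. A Python sketch with
invented names:

    # Hypothetical sketch: each thing wrong with the boat becomes an
    # object the program can enumerate, mark as fixed, and re-query.
    class Fault:
        def __init__(self, description):
            self.description = description
            self.fixed = False

    boat_faults = [Fault("torn sail"), Fault("leaky hull")]

    # "What is wrong with the boat?"
    print([f.description for f in boat_faults if not f.fixed])

    boat_faults[0].fixed = True   # the sail gets repaired

    # "Is there anything else wrong with the boat?"
    print(any(not f.fixed for f in boat_faults))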
The paradigm cases of causes occur when some event, like the McGovern
statement, seems anomalous, and one asks for the cause. The cause
itself should be taken to be whatever fact will satisfy a seeker of
a cause. Causes may therefore be definable only relative to the
intellectual situation of a person or program seeking a cause.
Other entities we may want to reify include
the facts about how cars work,
the facts of embryology, and
what we don't know about AI.
.cb Unending reification
Previously it seemed that we would have objects like
⊗on(Cat,Mat)
and a predicate ⊗true allowing us to write
%2true(on(Cat,Mat),context)%1.
Now we also allow the term
%2truth(on(Cat,Mat),Context)%1,
which can in turn be true or false or have some other value in a context.
The idea is to take no context as final.
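A small Python sketch of this idea; the representation of sentences
and contexts as tuples and tables is an assumption made for the
example. ⊗truth maps a sentence and a context to a value, and that
judgment can itself be judged in a wider context, so no context is
final.

    # Sentences as tuples; a context assigns values to sentences,
    # including sentences about truth in other contexts.
    on_Cat_Mat = ("on", "Cat", "Mat")

    def truth(sentence, context):
        return context.get(sentence, "unknown")

    inner = {on_Cat_Mat: True}
    # The outer context passes judgment on the inner judgment itself.
    outer = {("truth", on_Cat_Mat, "inner"): False}

    print(truth(on_Cat_Mat, inner))                       # True
    print(truth(("truth", on_Cat_Mat, "inner"), outer))   # False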
We have
aspects of the world (e.g. Fido) having infinite unknown detail,
concepts of these, which have many properties but are semi-linguistic, and
small finite domains like truth values.
A key form of reasoning involves exhausting the possibilities, but that
can be done only in a small finite domain. Thus consider
⊗location(Fido). This has infinite detail, since Fido is
continuously movable. However, we often need to reason exhaustively about
Fido's location - either he is in the kitchen or he is in the rest of
the house or he is outside. We could do this with ⊗location1(Fido) or
with ⊗room(location(Fido)), and either of these might involve a context
(or index following Scott's %2Advice on Modal Logic%1). We might even
allow ⊗location1(Fido) to have a few values ordinarily (i.e. after
a bit of non-monotonic default reasoning) but to take a more complex
value in unusual circumstances.
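A Python sketch of this last point, with an invented domain of places:
a conclusion checked for each member of the small finite domain holds
for Fido's actual location, and unusual circumstances enlarge the
domain.

    # Ordinarily (after default reasoning) location1(Fido) ranges
    # over a small finite domain; unusual circumstances add values.
    DEFAULT_PLACES = ["Kitchen", "RestOfHouse", "Outside"]

    def places(unusual=False):
        return DEFAULT_PLACES + (["Vet"] if unusual else [])

    def can_hear_dinner_call(place):
        # An illustrative predicate about each possible location.
        return place != "Vet"

    # Reasoning by exhausting the possibilities over the finite domain:
    print(all(can_hear_dinner_call(p) for p in places()))      # True
    print(all(can_hear_dinner_call(p) for p in places(True)))  # False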